Reduced Basis Greedy Selection Using Random Training Sets

Authors
Abstract


Similar articles

Online Greedy Reduced Basis Construction Using Dictionaries

As numerical simulations find more and more use in real-world scenarios and industrial applications, demands on efficiency and reliability increase as well. Scenarios that call for real-time simulation or multi-query evaluation of partial differential equations (PDEs) in particular often require some means of model order reduction. Examples of such scenarios are optimal control and optimizat...


Lecture 10 : Random Ordering and Greedy Selection

for some constant C. We may want to colour every vertex uniformly at random. Then we will get m·2^(1−k) many monochromatic edges in expectation. An improvement upon the basic method can be obtained by randomly recolouring all monochromatic edges after the first randomization. In a 1978 proof, Beck used this argument to show that m(k) ≥ Ω(k^(1/3)·2^k). In 2000, Radhakrishnan and Srinivasan improved it to m(k...
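The expectation m·2^(1−k) can be checked with a small simulation. The sketch below is illustrative only (a hypothetical 3-uniform hypergraph, not taken from the lecture): it colours every vertex uniformly at random with two colours and compares the empirical count of monochromatic edges against m·2^(1−k).

```python
import random

def count_monochromatic(edges, colouring):
    """Count edges whose vertices all received the same colour."""
    return sum(1 for e in edges if len({colouring[v] for v in e}) == 1)

def random_colouring(vertices):
    """Colour every vertex uniformly at random with one of two colours."""
    return {v: random.randint(0, 1) for v in vertices}

# Hypothetical example: a small k-uniform hypergraph (k = 3) on 10 vertices.
k = 3
vertices = range(10)
edges = [(0, 1, 2), (2, 3, 4), (4, 5, 6), (6, 7, 8), (1, 5, 9), (0, 4, 8)]
m = len(edges)

trials = 20_000
avg = sum(count_monochromatic(edges, random_colouring(vertices))
          for _ in range(trials)) / trials

# Each edge is monochromatic with probability 2 * 2^(-k) = 2^(1-k),
# so the expected count over all m edges is m * 2^(1-k).
print(f"empirical average: {avg:.4f}, expected m*2^(1-k): {m * 2 ** (1 - k):.4f}")
```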


Convergence Rates for Greedy Algorithms in Reduced Basis Methods

The reduced basis method was introduced for the accurate online evaluation of solutions to a parameter-dependent family of elliptic partial differential equations. Abstractly, it can be viewed as determining a “good” n-dimensional space H_n to be used in approximating the elements of a compact set F in a Hilbert space H. One, by now popular, computational approach is to find H_n through a greedy ...
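As a rough illustration of the greedy construction described above, the following sketch (a simplified strong-greedy variant on a finite set of hypothetical snapshot vectors, not the paper's algorithm) repeatedly adds to the basis the element of F that is worst approximated by the current space, measured by its orthogonal projection error.

```python
import numpy as np

def greedy_basis(F, n_max, tol=1e-10):
    """Strong greedy selection: at each step add the element of F with the
    largest distance to the span of the current basis, then orthonormalize
    the selected element into the basis (Gram-Schmidt step)."""
    basis = []                       # orthonormal basis of the current space H_n
    for _ in range(n_max):
        errors = []
        for f in F:
            proj = sum(np.dot(f, q) * q for q in basis)   # 0 if basis is empty
            errors.append(np.linalg.norm(f - proj))
        i_star = int(np.argmax(errors))
        if errors[i_star] < tol:     # all of F is already well approximated
            break
        g = F[i_star] - sum(np.dot(F[i_star], q) * q for q in basis)
        basis.append(g / np.linalg.norm(g))
    return basis

# Hypothetical training set: snapshots of a parameter-dependent vector.
params = np.linspace(0.0, 1.0, 50)
F = [np.array([np.sin(mu), np.cos(2 * mu), mu ** 2, 1.0]) for mu in params]
basis = greedy_basis(F, n_max=4)
print("selected basis dimension:", len(basis))
```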


Locally Adaptive Greedy Approximations for Anisotropic Parameter Reduced Basis Spaces

Reduced order models, in particular the reduced basis method, rely on empirically built, problem-dependent basis functions that are constructed during an offline stage. In the online stage, the precomputed problem-dependent solution space spanned by the basis functions can then be used to reduce the size of the computational problem. For complex problems, the number of b...
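To make the offline/online split concrete, the sketch below (with a hypothetical parametrized linear system standing in for the discretized PDE) shows the online stage in its simplest form: the full system A(μ)u = b is replaced by its Galerkin projection onto the precomputed reduced space spanned by the columns of V.

```python
import numpy as np

def online_solve(A_mu, b, V):
    """Online stage of a reduced basis method (simplified): project the full
    system onto the reduced space spanned by the columns of V and solve the
    small n x n system instead of the large N x N one."""
    A_red = V.T @ A_mu @ V           # n x n reduced operator
    b_red = V.T @ b                  # n-dimensional reduced right-hand side
    y = np.linalg.solve(A_red, b_red)
    return V @ y                     # lift the reduced solution back to R^N

# Hypothetical full-order problem of size N with a reduced space of size n.
N, n = 200, 5
rng = np.random.default_rng(0)
A_mu = np.eye(N) + 0.01 * rng.standard_normal((N, N))   # stand-in for A(mu)
b = rng.standard_normal(N)
V, _ = np.linalg.qr(rng.standard_normal((N, n)))        # stand-in reduced basis

u_rb = online_solve(A_mu, b, V)
print("reduced solution lifted to full space, shape:", u_rb.shape)
```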


Improved Generalization with Reduced Training Sets

This paper presents an incremental learning procedure for improving the generalization performance of multilayer feedforward networks. Whereas most existing algorithms try to reduce the size of the network to improve the likelihood of finding solutions of good generalization, our method constrains the search by incremental selection of training examples according to the estimated usefulness, call...
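A minimal sketch of this kind of incremental selection follows; since the snippet is truncated, it assumes that "usefulness" is estimated from the current model's error on each unused example, and it uses a linear least-squares model as a stand-in for the feedforward network to keep the example short.

```python
import numpy as np

def incremental_training(X, y, initial=5, add_per_round=5, rounds=4):
    """Grow the training set incrementally: after each fit, add the candidate
    examples on which the current model has the largest error (a stand-in for
    the 'estimated usefulness' mentioned in the abstract)."""
    rng = np.random.default_rng(0)
    selected = list(rng.choice(len(X), size=initial, replace=False))
    for _ in range(rounds):
        # Fit a simple least-squares model on the currently selected examples.
        w, *_ = np.linalg.lstsq(X[selected], y[selected], rcond=None)
        # Rank the remaining candidates by current prediction error.
        remaining = [i for i in range(len(X)) if i not in selected]
        errors = np.abs(X[remaining] @ w - y[remaining])
        worst = np.argsort(errors)[::-1][:add_per_round]
        selected.extend(remaining[i] for i in worst)
    return selected, w

# Hypothetical data: noisy linear targets in 3 dimensions.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(200)
selected, w = incremental_training(X, y)
print(f"training set grew to {len(selected)} examples; weights: {w}")
```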



Journal

Journal title: ESAIM: Mathematical Modelling and Numerical Analysis

Year: 2020

ISSN: 0764-583X, 1290-3841

DOI: 10.1051/m2an/2020004